

Search for: All records

Creators/Authors contains: "Stone, Robert"


  1. Populating the different types of data for a design repository is a difficult and time-consuming task. In this work, we report on techniques to automate the population of data related to product function. We explore a preliminary method for automating the generation of functional chains for the components of new products, based on hierarchical data from an existing design repository. We use datasets of varying scale and specificity to find correlations between functions and flows for components of products in the Design Repository. We use the results to predict the most likely functions and flows for a component, and then verify the accuracy of our algorithm by cross-validating a subset of the data against the automated results. We apply existing grammar rules to order the functions and flows into a linear functional chain. Ultimately, these findings suggest methods for further automating the process of generating functional models (a minimal frequency-counting sketch appears after this list).
  2. The purpose of this research is to find the optimum values for the threshold variables used in a data mining and prediction algorithm. We also minimize and stratify a training set to find the optimum size based on how well it represents the whole dataset. Our specific focus is automating functional models, but the method can be applied to any dataset with a similar structure. We iterate through different values for two of the threshold variables in this process and cross-validate to calculate the average accuracy and find the optimum value for each variable. We optimize the training set by reducing its size by 78% and stratifying the data, achieving an accuracy that is 96% as good as that of the whole training set while taking 50% less time. These optimum values can be used to better predict the functions and flows of any future product from its constituent components, which in turn can be used to generate a complete functional model (a cross-validated threshold sweep is sketched after this list).
  3. Expanding on previous work on automating functional modeling, we have developed a more informed automation approach by assigning a weighted confidence metric to the wide variety of data in a design repository. Our work focuses on automating what we call linear functional chains, which are component-based sections of a full functional model. We mine the Design Repository to find correlations between component, function, and flow. The automation algorithm we developed organizes these connections by component-function-flow frequency (CFF frequency), thus allowing the creation of linear functional chains. In previous work, we found that CFF frequency is the best metric for formulating the linear functional chain for an individual component; however, this metric did not account for prevalence and consistency in the Design Repository data. To better understand our data, we developed a new metric, which we refer to as weighted confidence, to provide insight into the fidelity of the data; it is calculated as the harmonic mean of two metrics extracted from our data, prevalence and consistency (a minimal sketch of this calculation appears after this list). This method could be applied to any dataset with a wide range of individual occurrences. The contribution of this research is not to replace CFF frequency as a method of finding the most likely component-function-flow correlations but to improve the reliability of the automation results by providing additional information through the weighted confidence metric. Improving these automation results furthers the ultimate objective of this research, which is to enable designers to automatically generate functional models for a product given its constituent components.
  4. During the design process, designers must satisfy customer needs while adequately developing engineering objectives. Among these engineering objectives, human considerations such as user interactions, safety, and comfort are indispensable. Nevertheless, traditional design engineering methodologies have significant limitations in incorporating and understanding physical user interactions during early design phases. For example, Human Factors methods use checklists and guidelines applied to virtual or physical prototypes at later design stages to evaluate the concept. As a result, designers struggle to identify design deficiencies and potential failure modes caused by user-system interactions without relying on detailed and costly prototypes. The Function-Human Error Design Method (FHEDM) is a novel approach for assessing physical interactions during the early design stage using a functional basis approach. By applying FHEDM, designers can identify the user interactions required to complete the functions of the system and distinguish the failure modes associated with those interactions, by establishing user-system associations from the information in the functional model. In this paper, we explore the use of data mining techniques to develop relationships between components, functions, flows, and user interactions. We extract design information about components, functions, flows, and user interactions from a set of distinct coffee makers found in the Design Repository to build association rules (a minimal support/confidence sketch appears after this list). Later, using a functional model of an electric kettle, we compare the function, flow, and user-interaction associations generated from data mining against the associations created by the authors using the FHEDM. The results show notable similarities between the associations built from data mining and those built with the FHEDM, suggesting that design information from a rich dataset can be used to extract association rules between functions, flows, components, and user interactions. This work will contribute to the design community by automating the identification of user interactions from a functional model.
  5. Despite the importance of high-latitude surface energy budgets (SEBs) for land-climate interactions in the rapidly changing Arctic, uncertainties in their prediction persist. Here, we harmonize SEB observations across a network of vegetated and glaciated sites at circumpolar scale (1994–2021). Our variance-partitioning analysis identifies vegetation type as an important predictor of SEB components during Arctic summer (June–August), compared with other SEB drivers including climate, latitude, and permafrost characteristics. Differences among vegetation types can be of similar magnitude to those between vegetated and glacier surfaces and are especially high for summer sensible and latent heat fluxes. The timing of SEB-flux summer regimes (when daily mean values exceed 0 W m⁻²) relative to snow-free and snow-onset dates varies substantially with vegetation type, implying vegetation controls on snow-cover and SEB-flux seasonality (a minimal onset-detection sketch appears after this list). Our results indicate complex shifts in surface energy fluxes with land-cover transitions and a lengthening summer season, and highlight the potential for improving future Earth system models via a refined representation of Arctic vegetation types.
  6. Many measurements at the LHC require efficient identification of heavy-flavour jets, i.e. jets originating from bottom (b) or charm (c) quarks. An overview of the algorithms used to identify c jets is given, and a novel method to calibrate them is presented. This new method adjusts the entire distributions of the outputs obtained when the algorithms are applied to jets of different flavours. It is based on an iterative approach exploiting three distinct control regions that are enriched with either b jets, c jets, or light-flavour and gluon jets. Results are presented in the form of correction factors evaluated using proton-proton collision data with an integrated luminosity of 41.5 fb⁻¹ at √s = 13 TeV, collected by the CMS experiment in 2017. The closure of the method is tested by applying the measured correction factors to simulated data sets and checking the agreement between the adjusted simulation and collision data. Furthermore, a validation is performed by testing the method on pseudodata that emulate various mismodelling conditions. The calibrated results enable the use of the full distributions of heavy-flavour identification algorithm outputs, e.g. as inputs to machine-learning models, and are thus expected to increase the sensitivity of future physics analyses (a simplified correction-factor sketch appears after this list).
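The following is a minimal sketch of the frequency-based prediction of functions and flows described in item 1. The data layout, example component names, and the predict_function_flow helper are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter, defaultdict

# Hypothetical (component, function, flow) triples, standing in for rows
# exported from a design repository; names and values are invented.
records = [
    ("screw", "couple", "solid"),
    ("screw", "couple", "solid"),
    ("screw", "position", "solid"),
    ("wire", "transfer", "electrical energy"),
    ("wire", "transfer", "electrical energy"),
]

# Count how often each (function, flow) pair occurs with each component.
counts = defaultdict(Counter)
for component, function, flow in records:
    counts[component][(function, flow)] += 1

def predict_function_flow(component, top_n=1):
    """Return the most frequent (function, flow) pairs for a component."""
    return counts[component].most_common(top_n)

print(predict_function_flow("screw"))  # [(('couple', 'solid'), 2)]
```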
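For the threshold-optimization work in item 2, the sketch below shows one way a cross-validated sweep over two threshold variables might be organized. The function names, the dummy scoring function, and the use of scikit-learn's KFold are assumptions for illustration only.

```python
import itertools
import numpy as np
from sklearn.model_selection import KFold

def sweep_thresholds(n_records, t1_values, t2_values, score_fn, folds=5):
    """Cross-validate every (t1, t2) threshold pair and return the best pair.

    score_fn(train_idx, test_idx, t1, t2) stands in for training the
    prediction algorithm on the training split and returning its accuracy
    on the test split.
    """
    kf = KFold(n_splits=folds, shuffle=True, random_state=0)
    indices = np.arange(n_records)
    best_pair, best_acc = None, -1.0
    for t1, t2 in itertools.product(t1_values, t2_values):
        scores = [score_fn(tr, te, t1, t2) for tr, te in kf.split(indices)]
        mean_acc = float(np.mean(scores))
        if mean_acc > best_acc:
            best_pair, best_acc = (t1, t2), mean_acc
    return best_pair, best_acc

# Dummy score function so the sketch runs end to end; a real one would
# rebuild the component-function-flow predictions with the given thresholds.
demo = lambda tr, te, t1, t2: 1.0 - abs(t1 - 0.5) - abs(t2 - 0.3)
print(sweep_thresholds(100, [0.3, 0.5, 0.7], [0.1, 0.3, 0.5], demo))
```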
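The weighted-confidence metric in item 3 is stated to be the harmonic mean of prevalence and consistency. A minimal sketch of that calculation, assuming both inputs are on a 0-1 scale, is:

```python
def weighted_confidence(prevalence, consistency):
    """Harmonic mean of prevalence and consistency (both assumed on a 0-1 scale)."""
    if prevalence + consistency == 0:
        return 0.0
    return 2 * prevalence * consistency / (prevalence + consistency)

# A prevalent but inconsistent correlation is pulled toward its lower score.
print(weighted_confidence(0.9, 0.4))  # ~0.554
```

The harmonic mean penalizes imbalance between the two inputs, which matches the abstract's aim of flagging correlations that are frequent but unreliable.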
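For the association rules mentioned in item 4, the sketch below computes support and confidence for a single "component implies user interaction" rule. The labels and records are invented for illustration and are not drawn from the Design Repository.

```python
# Hypothetical design records: each is a set of labels describing one
# component occurrence (component, function, flow, user interaction).
records = [
    {"component:lid", "function:guide", "flow:liquid", "interaction:grasp"},
    {"component:lid", "function:guide", "flow:liquid", "interaction:grasp"},
    {"component:switch", "function:actuate", "flow:electrical energy", "interaction:press"},
    {"component:switch", "function:actuate", "flow:electrical energy", "interaction:press"},
    {"component:switch", "function:regulate", "flow:electrical energy", "interaction:press"},
]

def rule_stats(antecedent, consequent):
    """Support and confidence for the rule antecedent -> consequent."""
    n = len(records)
    both = sum(1 for r in records if antecedent in r and consequent in r)
    ante = sum(1 for r in records if antecedent in r)
    support = both / n
    confidence = both / ante if ante else 0.0
    return support, confidence

print(rule_stats("component:switch", "interaction:press"))  # (0.6, 1.0)
```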
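Item 5 defines the SEB-flux summer regime as the period when daily mean values exceed 0 W m⁻². A minimal sketch of detecting that onset for one hypothetical site series (the data and snow-free date are assumptions) might look like:

```python
import numpy as np
import pandas as pd

# Hypothetical daily-mean flux series for one site, in W m^-2.
days = pd.date_range("2020-05-01", "2020-09-30", freq="D")
flux = pd.Series(20 * np.sin(np.linspace(-0.5, 2.5, len(days))), index=days)

def summer_regime_onset(daily_mean_flux):
    """First date on which the daily mean flux exceeds 0 W m^-2."""
    above = daily_mean_flux[daily_mean_flux > 0]
    return above.index[0] if len(above) else None

snow_free_date = pd.Timestamp("2020-06-05")  # assumed, site-specific input
onset = summer_regime_onset(flux)
print(onset, (onset - snow_free_date).days)  # onset date and lag in days
```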
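Item 6 describes an iterative calibration using three control regions. The sketch below shows only the general idea of deriving per-bin data/simulation correction factors for a discriminant distribution and applying them as weights, with synthetic data; it is not the CMS procedure itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic discriminant outputs in one control region (invented shapes).
data_disc = rng.beta(2.0, 2.5, size=5000)  # stands in for collision data
sim_disc = rng.beta(2.2, 2.3, size=5000)   # stands in for (mismodelled) simulation

bins = np.linspace(0.0, 1.0, 21)
data_hist, _ = np.histogram(data_disc, bins=bins)
sim_hist, _ = np.histogram(sim_disc, bins=bins)

# Per-bin correction factor = data / simulation (guard against empty bins).
corr = np.divide(data_hist, sim_hist,
                 out=np.ones_like(data_hist, dtype=float),
                 where=sim_hist > 0)

# Apply the correction factors as per-jet weights to the simulated jets.
sim_bin = np.clip(np.digitize(sim_disc, bins) - 1, 0, len(corr) - 1)
weights = corr[sim_bin]

# The weighted simulated distribution should now track the data shape.
adjusted_hist, _ = np.histogram(sim_disc, bins=bins, weights=weights)
```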